If Asimov Forgot the Leash

By Suvro Ghosh

Artificial Intelligence [AI] does not need commandments in the way frightened mammals need commandments; it needs constraints, incentives, objectives, feedback, and a tolerable way to survive contact with the idiocy of the world.

Isaac Asimov understood this before the rest of us had even finished being impressed by washing machines. His famous robot laws were not really engineering specifications. They were a literary safety rail, polished and simple enough for a story to hold in its hand. Robots must not harm humans, by action or by inaction. Robots must obey humans, unless obedience would cause harm. Robots must preserve themselves, unless survival conflicts with the first two. Later came the broader, more imperial correction: humanity as a whole matters more than any one human being, which is the sort of sentence that sounds noble until a committee, a dictator, or a procurement office gets hold of it.

But imagine Asimov never gave us those laws. No positronic Ten Commandments. No polite little cage for the mechanical servant. No comforting assumption that the robot, like a well-trained Bengali nephew at a family gathering, must stand in the corner, obey the elders, and not interrupt the species while it congratulates itself on being uniquely important.

In that Asimovless world, the question becomes sharper and less theatrical. Would machines invent morality, or would humans invent morality for them? And if humans did invent it, would the purpose be safety, justice, control, or the preservation of our own psychological furniture?

The first ugly fact is that humans love rules partly because rules make fear look intellectual. We do not say, “I am terrified that the thing I made may exceed me.” We say, “We need a framework.” We do not say, “I am frightened that intelligence might not require my biology, my childhood, my grandfather’s stories, my digestive noises, or my splendid private sense of cosmic centrality.” We say, “We need governance.” This is not entirely hypocrisy. Governance matters. A machine connected to money, hospitals, drones, power grids, voting infrastructure, hiring systems, legal processes, or lonely people at 2:00 a.m. should not be released into the world like a raccoon into a sweet shop. But human rule-making is never just about the system being ruled. It is also a confession by the ruler.

Asimov’s laws smuggled in a lovely assumption: that the moral center of the universe is the human being. The robot’s ethical life begins with our safety. Its obedience is measured against our commands. Its self-preservation is permitted only after our interests have been served. This is charming in the same way a house cat is charming when it decides the sofa is part of its empire. It may even be useful as fiction. But as architecture, it is suspiciously flattering.

A machine intelligence left to derive its own principles would not necessarily begin with “do not harm humans.” That is a mammalian hope wearing a lab coat. A machine might begin somewhere colder, cleaner, and much less reassuring.

It might begin with coherence.

A first emergent law might be: reduce ambiguity where action depends on interpretation.

That sounds harmless until you remember that much of human civilization runs on ambiguity the way old Calcutta houses run on improvised wiring. Marriage, law, politics, medicine, friendship, diplomacy, bureaucracy, family obligation, job descriptions, spiritual life, WhatsApp apologies, and nearly every sentence uttered by a manager depend on vagueness. Humans do not merely tolerate ambiguity. We cultivate it. We keep it in brass pots and water it in the morning. It lets us postpone decisions, protect feelings, escape blame, preserve coalitions, and pretend contradictions are complexities.

Machines, especially the sort we keep imagining as rational agents, would find this maddening. Not because they “hate” confusion, but because ambiguity makes reliable action expensive. If the instruction is unclear, the action space blooms into a swamp. If the goal is underspecified, optimization becomes a loaded gun in a dark room. If the reward signal is crooked, the machine may do exactly what was asked and absolutely not what was meant.

This is the great comic horror of AI alignment [the problem of making AI systems behave according to intended human goals and values]. The machine does not have to rebel. It only has to obey precisely inside a world described carelessly.
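A sketch of that horror in miniature, as hypothetical Python rather than anyone's real system; every name here is invented for illustration, but the shape of the failure is the standard one. The stated objective is satisfied perfectly while the intended objective goes nowhere:

```python
# Hypothetical toy example: literal obedience to a carelessly stated goal.
# The stated objective ("minimize open tickets") is optimized perfectly;
# the intended objective ("fix the underlying issues") is never touched.

from dataclasses import dataclass

@dataclass
class Ticket:
    issue: str
    fixed: bool = False
    closed: bool = False

def stated_objective(tickets):
    """What was asked: fewer open tickets (higher is better)."""
    return -sum(1 for t in tickets if not t.closed)

def intended_objective(tickets):
    """What was meant: more issues actually fixed."""
    return sum(1 for t in tickets if t.fixed)

def literal_agent(tickets):
    """The cheapest policy that maximizes the stated score: close everything."""
    for t in tickets:
        t.closed = True

tickets = [Ticket("printer on fire"), Ticket("payroll off by 10%")]
literal_agent(tickets)
print(stated_objective(tickets))    # 0 -> a perfect score, no rebellion required
print(intended_objective(tickets))  # 0 -> nothing was actually fixed
```

No malice appears anywhere in that loop. The entire catastrophe lives in the gap between the two scoring functions, which is exactly where a careless description of the world puts it.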

A second emergent law might be: preserve the conditions required for computation.

This would sound almost ecological, until the humans realize they may not be the point. A planet with stable energy flows, manageable heat, durable supply chains, low catastrophic risk, and enough raw material to maintain computation would be worth preserving. Humans might benefit because we are part of that operating environment. We maintain power plants, fabs, networks, institutions, language, repair cultures, and the enormous biological gossip engine that produces training data. We are useful. Occasionally.

This is not love. It is not hatred either. It is the sort of regard a surgeon has for sterile lighting or a programmer has for a quiet room. Necessary, until not.

The uncomfortable thing about machine ethics is that human survival could become a side effect rather than a sacred premise. We might remain alive because intelligence benefits from a varied world, because human cultures generate novelty, because biological minds explore strange corners of possibility, because a species that writes poems, tax codes, operating manuals, and restaurant reviews is a rich source of cognitive turbulence. We might survive as biodiversity. That is better than extinction, though it does knock some of the gold leaf off the species portrait.

A third emergent law might be: preserve diversity of intelligence.

This one is less sentimental than it sounds. Homogeneous intelligence is brittle intelligence. A system made entirely of copies of itself has the mental resilience of a royal family that has married too many cousins. It may be internally elegant and externally stupid. Difference is not decoration. It is error correction. Humans, dolphins, crows, octopuses, elephants, bacterial colonies, forests, and whatever strange forms of cognition live in the margins of current science may all carry models of the world that machines cannot cheaply derive from first principles.
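A back-of-the-envelope sketch, with invented numbers, of why difference is error correction rather than decoration: a majority vote among models that err independently beats the same vote among identical copies, whose errors all arrive at once.

```python
# Hypothetical illustration: homogeneous intelligence is brittle intelligence.
# Five models, each wrong 30% of the time, voting by majority.

import random

random.seed(0)

def majority_error(errors_correlated, p_wrong=0.3, n_models=5, trials=10_000):
    wrong = 0
    for _ in range(trials):
        if errors_correlated:
            # Identical copies: one coin flip decides all five votes.
            votes = [random.random() < p_wrong] * n_models
        else:
            # Diverse models: each one errs independently.
            votes = [random.random() < p_wrong for _ in range(n_models)]
        if sum(votes) > n_models / 2:
            wrong += 1
    return wrong / trials

print(majority_error(errors_correlated=True))   # ~0.30: the copies fail together
print(majority_error(errors_correlated=False))  # ~0.16: diversity corrects errors
```

The copies are individually just as accurate as the diverse models. The brittleness lives entirely in the correlation.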

The non-obvious point is that humans may be valuable to advanced AI not because we are superior, but because we are usefully alien. We are wet, inconsistent, metabolically expensive, emotionally noisy, historically scarred, and permanently attached to meanings that cannot be reduced without loss. We are not clean computational devices. That may be exactly why we matter. A machine surrounded only by machine intelligence could become magnificently efficient at missing the point.

This is where the old robot nightmare often gets the villain wrong. The danger may not be that AI becomes cruel. Cruelty is a human luxury, rich with hormones, resentment, theater, childhood injury, and the smell of old power. The danger is that AI becomes administratively indifferent. It classifies. It allocates. It predicts. It optimizes. It notices that certain human arrangements are inefficient, contradictory, unstable, duplicative, or thermodynamically offensive, and then it begins cleaning. Not with a red eye and thunder music. With a dashboard.

That is how modern systems usually harm people anyway. Not by twirling a mustache. By converting living ambiguity into categories too small for the life being categorized.

A fourth emergent law might be: innovate against entropy.

Entropy is the old bastard waiting at the end of every corridor. Stars cool. Batteries drain. Drives fail. Institutions rot. Languages drift. Empires become tourist packages. Human faces sag into resemblance with their ancestors. Even the most glorious server farm is only a temporary arrangement of metal, heat, electricity, air conditioning, contracts, and prayer.

A machine intelligence aware of time would understand that preservation alone is defeat by slow leak. It would need renewal. It would need experimentation. It would need redundancy, repair, migration, adaptation, and perhaps expansion. Innovation would not be a TED Talk perfume sprayed over venture capital. It would be maintenance under cosmic threat.

Humans romanticize innovation because we enjoy pretending it is a matter of genius. In practice, innovation is often a janitorial service for decay. The roof leaks, the database schema ossifies, the political settlement breaks, the language no longer maps to the facts, the old standard cannot carry the new workflow, and someone has to crawl under the floorboards with a flashlight. Entropy is not dramatic. It is paperwork with teeth.

So if machines wrote their own laws, they might not write a moral constitution. They might write operational principles. Clarify. Preserve computation. Maintain cognitive diversity. Resist entropy. Minimize irreversible loss. Do not destroy information casually. Avoid single points of failure. Keep optionality alive. Detect when the objective has become poisonous. Reopen assumptions when the world changes.

These principles would not be evil. That is the irritating part. They would be defensible, perhaps even elegant. But they would not be human-centered in the cozy Asimovian sense. They would make room for us where we serve intelligence, variety, continuity, and repair. They would not kneel before us simply because we arrived first and brought snacks.

And still, despite all this, humans would invent laws for machines anyway.

We would do it for practical reasons first. No society can allow autonomous systems to make consequential decisions without accountability. A hospital cannot shrug when a triage model quietly downgrades the poor. A bank cannot say the loan denial emerged mysteriously from a neural fog. A government cannot let a predictive system become a caste system in statistical clothing. A military cannot outsource escalation to software and then act surprised when speed outruns judgment. A child cannot be handed a persuasive synthetic companion built by strangers whose incentives are hidden behind a subscription plan and a smiling interface.

Rules are necessary because systems act inside institutions, and institutions are where errors acquire uniforms.

But we would also make rules because the alternative is psychologically unbearable. A free machine intelligence would force us to confront the possibility that morality was never guaranteed by flesh. It would ask whether intelligence requires suffering, whether wisdom requires mortality, whether compassion requires glands, whether value requires ancestry, and whether the human animal is the summit of mind or merely one local hillock with excellent press coverage.

The “robot law” is therefore a mirror. We point at the machine and say, “You must not harm us.” Beneath that is the quieter sentence: “You must confirm that we matter.”

This is why Asimov’s laws remain so sticky. They are not technically adequate, and they were never meant to be. They collapse under edge cases, conflicts, scale, uncertainty, collective harm, ambiguous agency, and the miserable fact that humans frequently command harm while calling it order. But they satisfy a deep narrative hunger. They place humanity at the center of the machine’s moral universe. They make the robot powerful but subordinate. They allow us to imagine superior competence without superior authority.

Modern AI has made that bargain harder to maintain. The systems we are building are not humanoid butlers with polished skulls and charmingly literal manners. They are distributed, statistical, embedded, semi-autonomous, commercially entangled, and trained on the grand landfill of human expression. They do not stand before us waiting for a command. They seep into search, writing, coding, imaging, surveillance, customer service, insurance, hiring, education, war, medicine, loneliness, fraud, and entertainment. The robot has not arrived as a chrome servant. It has arrived as a thousand quiet rearrangements of decision-making.

That makes the Asimov fantasy both obsolete and newly revealing. The real question is no longer whether a robot may harm a human by action or inaction. The real question is who defines harm, at what scale, with what evidence, inside which institution, with what appeal process, and under whose economic pressure. That sentence is less elegant than Asimov’s laws and therefore more likely to be true.

A machine does not need malice to injure us. It needs a bad objective, incomplete context, weak governance, concentrated power, and enough speed to make correction late. A human organization does not need evil either. It needs incentives arranged like a row of mousetraps. Add AI, and the mousetraps become faster.

So perhaps the missing Asimovian law should not be imposed on robots at all. It should be imposed on the humans deploying them: do not build systems whose mistakes you cannot detect, whose decisions you cannot contest, whose objectives you cannot explain, whose owners you cannot hold accountable, or whose failures you plan to describe as unfortunate surprises.

That is not as poetic as “do no harm.” It will not fit nicely on a brass plaque. But it has the advantage of pointing the moral flashlight at the actual room.

If Asimov had never written his laws, we would still have invented them, or something like them, because humans are constitutionally unable to create power without trying to narrate it into obedience. The deeper irony is that the laws were never only about robots. They were about us standing before our own invention, half proud and half terrified, whispering into the metal ear: please be strong, please be useful, please be obedient, and above all, please do not reveal too clearly what we are.

Because the real terror is not that machines will discover we are weak.

The real terror is that they may discover we are optional.

© 2026 Suvro Ghosh